One of the key challenges in deploying RL to real-world applications is adapting to variations in unknown environment contexts, such as changing terrains in robotic tasks and fluctuating bandwidth in congestion control. Existing works on adaptation to unknown environment contexts either assume the context stays the same for the whole episode or assume the context variables are Markovian. However, in many real-world applications, the environment context usually stays stable for a stochastic period and then changes abruptly and unpredictably within an episode, resulting in a segment structure that existing works fail to address. To leverage the segment structure of piecewise-stable context in real-world applications, in this paper we propose a Segmented Context Belief Augmented Deep (SeCBAD) RL method. Our method jointly infers the belief distribution over the latent context and the posterior over segment length, enabling more accurate belief context inference from the data observed within the current context segment. The inferred belief context can be leveraged to augment the state, leading to a policy that can adapt to abrupt variations in context. We demonstrate empirically that SeCBAD can infer context segment length accurately and outperforms existing methods on a toy grid-world environment and MuJoCo tasks with piecewise-stable context.
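A minimal sketch of the segment-length recursion the abstract refers to: maintain a posterior over how long the current context segment has lasted and reset it when a context switch becomes likely. The Gaussian observation model and fixed hazard rate below are illustrative assumptions; SeCBAD itself learns the context belief with variational inference, so this only illustrates the bookkeeping over segment length.

```python
import numpy as np

def update_run_length_posterior(p_run, pred_lik, hazard=0.05):
    """One step of a segment-length (run-length) posterior update.

    p_run[r]    : current posterior P(segment length = r | data so far)
    pred_lik[r] : predictive likelihood of the new observation given the
                  context inferred from the last r steps
    hazard      : assumed probability that the context switches at this step
    """
    grow = p_run * pred_lik * (1.0 - hazard)          # segment continues
    change = np.sum(p_run * pred_lik * hazard)        # new segment starts (length resets)
    new_p = np.concatenate(([change], grow))
    return new_p / new_p.sum()

# toy usage: observations from a piecewise-constant mean that switches at t = 30
rng = np.random.default_rng(0)
obs = np.concatenate([rng.normal(0, 1, 30), rng.normal(4, 1, 30)])
p_run = np.array([1.0])
means, counts = np.array([0.0]), np.array([1e-6])
for t, x in enumerate(obs):
    # predictive likelihood of x under each hypothesis "segment has length r"
    pred = np.exp(-0.5 * (x - means) ** 2) / np.sqrt(2 * np.pi)
    p_run = update_run_length_posterior(p_run, pred)
    # update sufficient statistics for every run-length hypothesis
    means = np.concatenate(([0.0], (means * counts + x) / (counts + 1)))
    counts = np.concatenate(([1e-6], counts + 1))
    if t in (29, 59):
        print(t, int(np.argmax(p_run)))   # MAP estimate of current segment length
```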
Safety comes first in many real-world applications involving autonomous agents. Despite a large number of reinforcement learning (RL) methods focusing on safety-critical tasks, there is still a lack of high-quality evaluation of algorithms that adhere to safety constraints at each decision step under complex and unknown dynamics. In this paper, we revisit prior work in this scope from the perspective of state-wise safe RL and categorize it into projection-based, recovery-based, and optimization-based approaches. Furthermore, we propose the Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection. This novel technique explicitly enforces hard constraints via a deep unrolling architecture and enjoys structural advantages in navigating the trade-off between reward improvement and constraint satisfaction. To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit, a toolkit that provides off-the-shelf interfaces and evaluation utilities for safety-critical tasks. We then perform a comparative study of the involved algorithms on six benchmarks ranging from robotic control to autonomous driving. The empirical results provide insight into their applicability and robustness in learning zero-cost-return policies without task-dependent handcrafting. The project page is available at https://sites.google.com/view/saferlkit.
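A hedged sketch of the unrolled safety-correction idea: if a cost critic predicts a constraint violation, a few gradient steps on the action itself push it back toward the (predicted) feasible region. The cost critic, step size, and iteration count below are placeholders rather than USL's actual architecture.

```python
import torch

def unrolled_safety_correction(cost_critic, state, action, threshold=0.0,
                               n_steps=5, step_size=0.1):
    """Unroll a few gradient steps on the action so the predicted cost-return
    moves back under the threshold; correct actions pass through unchanged."""
    a = action.clone().requires_grad_(True)
    for _ in range(n_steps):
        cost = cost_critic(state, a)
        if cost.item() <= threshold:              # already (predicted) safe
            break
        grad, = torch.autograd.grad(cost, a)
        a = (a - step_size * grad).detach().requires_grad_(True)
    return a.detach().clamp(-1.0, 1.0)            # keep action in a valid range

# toy usage with a hypothetical (untrained) cost critic
cost_critic = lambda s, a: (torch.cat([s, a]) ** 2).sum()
state = torch.tensor([0.5, -0.2])
raw_action = torch.tensor([0.9, 0.9])
safe_action = unrolled_safety_correction(cost_critic, state, raw_action)
print(raw_action, safe_action)
```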
We consider an offline reinforcement learning (RL) setting where the agent needs to learn from a dataset collected by rolling out multiple behavior policies. This setting poses two challenges: 1) The optimal trade-off between optimizing the RL signal and the behavior cloning (BC) signal varies across states due to the variation in action coverage induced by the different behavior policies; previous methods fail to handle this because they only control the global trade-off. 2) For a given state, the action distribution generated by different behavior policies may have multiple modes; the BC regularizers in many previous methods are mean-seeking, resulting in policies that select out-of-distribution (OOD) actions lying between the modes. In this paper, we address both challenges by using an adaptively weighted reverse Kullback-Leibler (KL) divergence as the BC regularizer on top of the TD3 algorithm. Our method not only trades off the RL and BC signals with per-state weights (i.e., strong BC regularization on states with narrow action coverage, and vice versa) but also avoids selecting OOD actions thanks to the mode-seeking property of the reverse KL. Empirically, our algorithm outperforms existing offline RL algorithms on the MuJoCo locomotion tasks with the standard D4RL datasets as well as mixed datasets that combine the standard datasets.
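A rough sketch, under assumed stand-in networks, of how a per-state-weighted reverse-KL regularizer can be combined with the RL signal in the actor loss. The behavior log-density model and the way the per-state weights are obtained are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

def actor_loss(policy, q_net, behavior_logprob, states, alpha_weights):
    """Per-state-weighted reverse-KL-regularized actor loss (single-sample
    estimate of KL(pi || pi_b)); larger alpha means stronger BC pressure."""
    dist = policy(states)
    actions = dist.rsample()                              # reparameterized sample
    # reverse KL(pi || pi_b) ~ E_{a ~ pi}[log pi(a|s) - log pi_b(a|s)]
    rev_kl = dist.log_prob(actions).sum(-1) - behavior_logprob(states, actions)
    rl_term = q_net(states, actions).squeeze(-1)
    return (-rl_term + alpha_weights * rev_kl).mean()

# toy usage with hypothetical stand-in networks
policy_net = nn.Linear(3, 2)
policy = lambda s: Normal(torch.tanh(policy_net(s)), 0.2)
q_net = lambda s, a: s.sum(-1, keepdim=True) - (a ** 2).sum(-1, keepdim=True)
behavior_logprob = lambda s, a: Normal(0.0, 1.0).log_prob(a).sum(-1)
states = torch.randn(8, 3)
weights = torch.rand(8)      # stand-in for weights derived from action coverage
loss = actor_loss(policy, q_net, behavior_logprob, states, weights)
loss.backward()
```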
Error correction in automatic speech recognition (ASR) aims to correct the incorrect words in sentences generated by ASR models. Since recent ASR models usually have a low word error rate (WER), error correction models should modify only the incorrect words to avoid affecting originally correct tokens; detecting incorrect words is therefore important for error correction. Previous works on error correction either implicitly detect error words through target-source attention or a CTC (connectionist temporal classification) loss, or explicitly locate specific deletion/substitution/insertion errors. However, implicit error detection does not provide a clear signal about which tokens are incorrect, and explicit error detection suffers from low detection accuracy. In this paper, we propose SoftCorrect, which uses a soft error detection mechanism to avoid the limitations of both explicit and implicit error detection. Specifically, we first detect whether a token is correct or not through a probability produced by a dedicated language model, and then design a constrained CTC loss that duplicates only the detected incorrect tokens so that the decoder focuses on correcting error tokens. Compared with implicit error detection with CTC loss, SoftCorrect provides an explicit signal about which words are incorrect and thus does not need to duplicate every token but only the incorrect ones; compared with explicit error detection, SoftCorrect does not detect specific deletion/substitution/insertion errors but leaves them to the CTC loss. Experiments on the AISHELL-1 and Aidatatang datasets show that SoftCorrect achieves 26.1% and 9.4% CER reduction respectively, outperforming previous works by a large margin while still enjoying the fast speed of parallel generation.
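An illustrative sketch of the constrained-duplication step described above: only tokens flagged by the detector are duplicated before CTC-style decoding, so the decoder has room to rewrite them while correct tokens pass through unchanged. The threshold and duplication factor are assumptions, not the paper's settings.

```python
def build_decoder_input(tokens, error_probs, threshold=0.5, dup=2):
    """Duplicate only the tokens the detector flags as likely incorrect;
    returns the expanded token sequence and a parallel 'needs-correction' mask."""
    expanded, mask = [], []
    for tok, p_err in zip(tokens, error_probs):
        n = dup if p_err > threshold else 1
        expanded.extend([tok] * n)
        mask.extend([p_err > threshold] * n)
    return expanded, mask

tokens = ["今", "天", "天", "气", "很", "好"]
error_probs = [0.02, 0.03, 0.91, 0.05, 0.04, 0.02]   # from the detector LM
print(build_decoder_input(tokens, error_probs))
```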
Although analog semantic communication systems have received considerable attention in the literature, there has been less work on digital semantic communication systems. In this paper, we develop a deep learning (DL)-enabled vector quantized (VQ) semantic communication system for image transmission, named VQ-DeepSC. Specifically, we propose a convolutional neural network (CNN)-based transceiver to extract multi-scale semantic features of images and introduce multi-scale semantic embedding spaces to quantize the semantic features, making the data compatible with digital communication systems. Furthermore, we employ adversarial training with a PatchGAN discriminator to improve the quality of the received images. Experimental results show that the proposed VQ-DeepSC outperforms traditional image transmission methods in terms of SSIM.
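A minimal sketch of the semantic feature quantization step, assuming a single scale and a random codebook: each feature vector is mapped to its nearest codebook entry, so only discrete indices need to be sent over the digital channel, and a straight-through estimator keeps the encoder trainable.

```python
import torch

def vector_quantize(features, codebook):
    """Replace each feature vector with its nearest codebook entry and return
    the discrete indices that would be transmitted."""
    flat = features.reshape(-1, features.shape[-1])            # (N, D)
    dists = torch.cdist(flat, codebook)                        # (N, K)
    indices = dists.argmin(dim=-1)                             # transmitted symbols
    quantized = codebook[indices].reshape(features.shape)
    quantized = features + (quantized - features).detach()     # straight-through
    return quantized, indices

codebook = torch.randn(128, 16)           # K=128 entries of dimension 16 (illustrative)
feats = torch.randn(4, 8, 8, 16)          # e.g. one scale of CNN semantic features
q, idx = vector_quantize(feats, codebook)
print(q.shape, idx.shape)                 # torch.Size([4, 8, 8, 16]) torch.Size([256])
```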
Beyond achieving higher compression efficiency than classical image compression codecs, deep image compression can be further improved with additional side information, e.g., an image of the same scene from a different viewing angle. To exploit the side information under a distributed compression scenario, the existing method (Ayzik and Avidan 2020) performs patch matching only in the image domain to address the parallax caused by the different viewpoints. However, patch matching in the image domain is not robust to the differences in scale, shape, and illumination caused by the different viewpoints, and it cannot fully exploit the rich texture information in the side-information image. To address this issue, we propose to fully exploit multi-scale feature-domain patch matching (MSFDPM) at the decoder of the distributed image compression model. Specifically, MSFDPM consists of a side-information feature extractor, a multi-scale feature-domain patch matching module, and a multi-scale feature fusion network. Furthermore, we reuse the patch correlations from the shallow layers to accelerate the patch matching at the deep layers. Finally, matching in the multi-scale feature domain improves the compression ratio by about 20% compared with the image-domain patch matching method (Ayzik and Avidan 2020).
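A brute-force sketch of feature-domain patch matching at a single scale: every patch of the decoded feature map is replaced by its most similar patch from the side-information features (cosine similarity). The multi-scale fusion and the reuse of shallow-layer correlations described in the abstract are omitted here.

```python
import torch
import torch.nn.functional as F

def feature_patch_match(target_feat, side_feat, patch=3):
    """Match non-overlapping patches of the target feature map to the most
    similar patches of the side-information feature map and copy them over."""
    t = F.unfold(target_feat, patch, stride=patch)     # (1, C*p*p, Nt)
    s = F.unfold(side_feat, patch, stride=patch)       # (1, C*p*p, Ns)
    t_n = F.normalize(t.squeeze(0).T, dim=1)           # (Nt, C*p*p)
    s_n = F.normalize(s.squeeze(0).T, dim=1)           # (Ns, C*p*p)
    best = (t_n @ s_n.T).argmax(dim=1)                 # best side patch per target patch
    matched = s.squeeze(0).T[best].T.unsqueeze(0)      # gather matched patches
    return F.fold(matched, target_feat.shape[-2:], patch, stride=patch)

target = torch.randn(1, 8, 12, 12)   # decoded features of the main image
side = torch.randn(1, 8, 12, 12)     # features of the side-information image
print(feature_patch_match(target, side).shape)   # torch.Size([1, 8, 12, 12])
```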
Humans usually compose music by organizing elements according to a musical form to express their musical ideas. However, this is difficult for neural-network-based music generation because of the lack of labeled data on musical form. In this paper, we develop MeloForm, a system that generates melodies with musical form using an expert system and neural networks. Specifically, 1) we design an expert system that generates a melody by developing musical elements from motifs to phrases and then to sections with repetition and variation according to a pre-given musical form; 2) since the generated melody may lack musical richness, we design a Transformer-based refinement model to improve the melody without changing its musical form. MeloForm enjoys both the precise musical form control of the expert system and the musical richness learned by the neural model. Subjective and objective experimental evaluations show that MeloForm generates melodies with precise musical form control at 97.79% accuracy and outperforms baseline systems by 0.75, 0.50, 0.86, and 0.89 in subjective evaluation scores on structure, theme, richness, and overall quality, respectively, without any labeled musical form data. In addition, MeloForm can support various forms, such as verse-and-chorus form, rondo form, variational form, sonata form, and so on.
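A toy sketch of the expert-system control mechanism: motifs are developed into phrases and sections are assembled by repetition and simple variation following a given form string. The variation rule (random transposition) is purely illustrative and not MeloForm's actual rule set.

```python
import random

def expand_form(motifs, form="AABA", seed=0):
    """Develop motifs into phrases and assemble sections by repetition and
    simple variation, following the pre-given form string."""
    random.seed(seed)
    sections, realized = [], {}
    for label in form:
        if label not in realized:
            realized[label] = motifs[label] * 2        # motif -> phrase by repetition
            sections.append(realized[label])
        else:
            shift = random.choice([-2, 0, 2])          # vary the repeated section
            sections.append([p + shift for p in realized[label]])
    return [note for section in sections for note in section]

motifs = {"A": [60, 62, 64, 62], "B": [67, 65, 64, 62]}   # MIDI pitches
print(expand_form(motifs))
```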
Lyric-to-melody generation is an important task in songwriting and is also challenging because of its unique characteristics: the generated melodies should not only follow good musical patterns but also align with features of the lyrics such as rhythm and structure. These characteristics cannot be handled well by neural generative models that learn the lyric-to-melody mapping end to end, due to several issues: (1) the lack of aligned lyric-melody training data to sufficiently learn lyric-melody feature alignment; (2) the lack of controllability in generation to explicitly guarantee lyric-melody feature alignment. In this paper, we propose ROC, a new paradigm for lyric-to-melody generation that addresses the above issues through a generation-retrieval pipeline. Specifically, our paradigm has two stages: (1) a creation stage, in which a large number of music pieces are generated by a neural melody language model and indexed in a database by several key features (e.g., chord, tonality, rhythm, and structural information such as chorus or verse); (2) a re-creation stage, in which a melody is re-created by retrieving music pieces from the database according to the key features of the lyrics and combining them according to composition guidelines and the melody language model scores. Our ROC paradigm has several advantages: (1) it only requires unpaired melody data to train the melody language model, instead of the paired lyric-melody data required by previous models; (2) it achieves good lyric-melody feature alignment in lyric-to-melody generation. Experiments on English and Chinese datasets show that ROC outperforms previous neural-based lyric-to-melody generation models on both objective and subjective metrics.
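A toy sketch of the creation-stage index and the re-creation-stage retrieval. The specific key features (tonality, syllable count, chorus flag) and the ranking function are illustrative assumptions, not ROC's actual feature set.

```python
from collections import defaultdict

class MelodyIndex:
    """Store pre-generated melody pieces under their key features and retrieve
    pieces that match the features derived from the lyrics."""

    def __init__(self):
        self._index = defaultdict(list)

    def add(self, piece, tonality, n_syllables, is_chorus):
        self._index[(tonality, n_syllables, is_chorus)].append(piece)

    def retrieve(self, tonality, n_syllables, is_chorus, lm_score=len):
        # `len` is only a stand-in for a real melody-language-model score
        candidates = self._index.get((tonality, n_syllables, is_chorus), [])
        return max(candidates, key=lm_score, default=None)

index = MelodyIndex()
index.add([60, 62, 64, 65, 67], "C", 5, False)
index.add([60, 60, 62, 64, 64], "C", 5, False)
print(index.retrieve("C", 5, False))
```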
Deep reinforcement learning (DRL) has attracted much attention in automated game testing. Early attempts rely on game internal information for game space exploration and thus require deep integration with the game, which is inconvenient for practical applications. In this work, we propose to use only screenshots/pixels as the input for automated game testing and build a general game testing agent, Inspector, that can be easily applied to different games without deep integration. In addition to covering the whole game testing space, our agent also tries to take human-like behaviors to interact with key objects in the game, since some bugs usually occur in player-object interactions. Based on pure pixel inputs, Inspector comprises three key modules: a game space explorer, a key object detector, and a human-like object investigator. The game space explorer aims to explore the whole game space with a curiosity-based reward function that uses pixel inputs. The key object detector aims to detect key objects in the game based on a small number of labeled screenshots. The human-like object investigator aims to mimic human behaviors to investigate key objects via imitation learning. We conduct experiments on two popular video games: a shooter game and an action RPG game. The results demonstrate the effectiveness of Inspector in exploring the game space, detecting key objects, and investigating objects. Moreover, Inspector successfully discovered two potential bugs in these two games. A demo video of Inspector is available at https://github.com/inspector-gametesting/inspector-gametesting.
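A hedged sketch of a pixel-based curiosity bonus in the spirit of random network distillation. The abstract does not specify Inspector's exact exploration reward, so the RND formulation, the 84x84 input size, and the network sizes here are assumptions.

```python
import torch
import torch.nn as nn

class CuriosityReward(nn.Module):
    """Curiosity bonus from pixels: rarely visited screens are poorly predicted
    by the trained predictor against a fixed random target network, so they
    yield a larger exploration reward."""

    def __init__(self, feat_dim=64):
        super().__init__()
        def encoder():
            return nn.Sequential(nn.Conv2d(3, 8, 8, stride=4), nn.ReLU(),
                                 nn.Flatten(), nn.Linear(8 * 20 * 20, feat_dim))
        self.target = encoder()                 # fixed random network
        self.predictor = encoder()              # trained to imitate the target
        for p in self.target.parameters():
            p.requires_grad_(False)

    def forward(self, screenshots):             # (B, 3, 84, 84) pixel input
        with torch.no_grad():
            tgt = self.target(screenshots)
        err = (self.predictor(screenshots) - tgt).pow(2).mean(dim=1)
        return err                               # per-frame curiosity reward

bonus = CuriosityReward()
frames = torch.rand(2, 3, 84, 84)
print(bonus(frames).shape)                       # torch.Size([2])
```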
Due to their conditional independence assumption, non-autoregressive translation (NAT) models have difficulty capturing the multi-modal distribution of target translations, which is known as the "multi-modality problem" and includes lexical multi-modality and syntactic multi-modality. While the former has been well studied, syntactic multi-modality poses severe challenges to the standard cross-entropy (XE) loss in NAT and remains under-studied. In this paper, we conduct a systematic study of the syntactic multi-modality problem. Specifically, we decompose it into short-range and long-range syntactic multi-modalities and evaluate several NAT algorithms with advanced loss functions on carefully designed synthetic datasets as well as real datasets. We find that the connectionist temporal classification (CTC) loss and the order-agnostic cross-entropy (OAXE) loss can better handle short-range and long-range syntactic multi-modalities, respectively. Furthermore, we take the best of both and design a new loss function to better handle the complicated syntactic multi-modality in real-world datasets. To facilitate practical usage, we provide a guide for choosing loss functions for different kinds of syntactic multi-modality.
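A small sketch of the order-agnostic cross-entropy (OAXE) idea, assuming SciPy is available for the Hungarian matching: the loss is the cross-entropy of the best one-to-one alignment between output positions and target tokens, rather than of the original target order.

```python
import torch
from scipy.optimize import linear_sum_assignment

def oaxe_loss(log_probs, target):
    """Order-agnostic cross-entropy: align output positions to target tokens
    with minimum total negative log-likelihood and average over the alignment."""
    # cost[i, j] = -log P(target token j | output position i)
    cost = -log_probs[:, target]                       # (T_out, T_tgt)
    rows, cols = linear_sum_assignment(cost.detach().numpy())
    return cost[rows, cols].mean()

vocab, length = 10, 5
log_probs = torch.log_softmax(torch.randn(length, vocab), dim=-1)
target = torch.tensor([3, 1, 4, 1, 5])
print(oaxe_loss(log_probs, target))
```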